Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator


Abstract

As designers of artificial intelligence try to outwit hackers, both sides continue to home in on AI's inherent vulnerabilities. Designed and trained on data from particular statistical distributions, deep neural networks (DNNs) remain vulnerable to deceptive inputs that violate a DNN's statistical, predictive assumptions. However, most existing adversarial examples cannot maintain their malicious functionality when an affine transformation is applied before they are fed into the network. For practical purposes, maintaining that functionality serves as an important measure of the robustness of attacks. To help DNNs learn to defend themselves more thoroughly against attacks, we propose an affine-invariant attack, which can consistently produce adversarial examples that remain robust over affine transformations. For efficiency, we disentangle current affine-transformation strategies on the Euclidean coordinate plane into their geometric components, translations, rotations, and dilations, and reformulate the latter two in polar coordinates. Afterwards, we construct an affine-invariant gradient estimator by convolving the gradient at the original image with derived kernels, which can be integrated into any gradient-based attack method. Extensive experiments on ImageNet, including some under physical conditions, demonstrate that our method can significantly improve the invariance of adversarial examples and, as a byproduct, their transferability, compared with alternative state-of-the-art methods.
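The kernel-convolution idea in the abstract generalizes the earlier translation-invariant attack trick: instead of averaging gradients over many transformed copies of the image, one convolves the gradient at the original image with a precomputed kernel. The sketch below shows only the translation-only special case with a Gaussian kernel and a single FGSM-style step on a toy linear "model"; the function names, kernel choice, and step size are illustrative assumptions, not the paper's actual estimator (which also covers rotations and dilations via polar coordinates).

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Normalized 2-D Gaussian kernel; weights gradients of nearby shifts.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smoothed_gradient(grad, kernel):
    # Convolve the gradient with the kernel (zero padding). This is
    # equivalent to averaging gradients over small translations of the input.
    h, w = grad.shape
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(grad, pad)
    out = np.empty_like(grad)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + ks, j:j + ks] * kernel)
    return out

def fgsm_step(x, grad, eps=0.03):
    # One fast-gradient-sign step, clipped to the valid pixel range.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy stand-in for a model: a linear loss L(x) = sum(w * x), whose
# gradient w.r.t. x is simply w. A real attack would use backprop here.
rng = np.random.default_rng(0)
x = rng.random((8, 8))          # "image" in [0, 1]
w = rng.standard_normal((8, 8)) # gradient of the toy loss
g = smoothed_gradient(w, gaussian_kernel(3, 1.0))
x_adv = fgsm_step(x, g)
```

Because the estimator only changes how the gradient is computed, the same smoothing step can be dropped into iterative attacks (e.g. each iteration of I-FGSM) without other modifications.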


Similar Articles

Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization

Deep neural networks have lately shown tremendous performance in various applications including vision and speech processing tasks. However, alongside their ability to perform these tasks with such high accuracy, it has been shown that they are highly susceptible to adversarial attacks: a small change of the input would cause the network to err with high confidence. This phenomenon exposes an i...


Improving Network Robustness against Adversarial Attacks with Compact Convolution

Though Convolutional Neural Networks (CNNs) have surpassed human-level performance on tasks such as object classification and face verification, they can easily be fooled by adversarial attacks. These attacks add a small perturbation to the input image that causes the network to mis-classify the sample. In this paper, we focus on neutralizing adversarial attacks by compact feature learning. In ...


On the Robustness of Semantic Segmentation Models to Adversarial Attacks

Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic...


Parseval Networks: Improving Robustness to Adversarial Examples

We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1. Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation. The most impor...
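The blurb above mentions constraining the Lipschitz constant of each layer to be at most 1. A minimal way to illustrate that constraint for a linear layer is to rescale its weight matrix by its spectral norm (largest singular value); note this simple projection is only an assumption-laden stand-in, not the orthonormality-based regularization Parseval networks actually use.

```python
import numpy as np

def project_lipschitz(W, c=1.0):
    # A linear map x -> W @ x is c-Lipschitz (in the L2 sense) iff its
    # spectral norm is at most c; rescale W if it exceeds the bound.
    s = np.linalg.norm(W, 2)  # ord=2 on a matrix = largest singular value
    return W if s <= c else W * (c / s)

W = np.random.default_rng(1).standard_normal((4, 6)) * 3.0
Wp = project_lipschitz(W)
```

Bounding each layer's Lipschitz constant bounds the whole network's, which in turn limits how much a small adversarial perturbation of the input can change the output.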




Journal

Journal title: Social Science Research Network

Year: 2022

ISSN: 1556-5068

DOI: https://doi.org/10.2139/ssrn.4095198